
    A Temporal Framework for Hypergame Analysis of Cyber Physical Systems in Contested Environments

    Game theory is used to model conflicts between one or more players over resources. It offers players a way to reason, providing a rationale for selecting strategies that avoid the worst outcome. Game theory, however, lacks the ability to incorporate advantages one player may have over another. A meta-game, known as a hypergame, occurs when one player does not know or fully understand all the strategies of a game. Hypergame theory builds upon the utility of game theory by allowing a player to outmaneuver an opponent, thus obtaining a more preferred outcome with higher utility. Recent work in hypergame theory has focused on normal-form static games, which cannot encode several realistic strategies; for example, a player's available actions in the future may depend on the selections made in the past. This work presents a temporal framework for hypergame models. The framework is the first application of temporal logic to hypergames and provides more flexible modeling for domain experts. Within this new framework, the concepts of trust, distrust, mistrust, and deception are formalized. While past literature references deception in hypergame research, this work is the first to formalize its definition for hypergames. The temporal framework is demonstrated on classical game-theoretic examples as well as a complex supervisory control and data acquisition (SCADA) network temporal hypergame. The SCADA network example includes actions with temporal dependencies, where a choice in the first round affects which decisions can be made in later rounds of the game. The demonstration results show that the framework is a realistic and flexible modeling method for a variety of applications.
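    The temporal dependency described above can be illustrated with a small sketch. The following Python example is not the paper's framework; all action names and the dependency structure are invented for illustration. It enumerates the plays of a two-round game in which each player's round-2 options depend on that player's round-1 choice, as in the SCADA example.

```python
# Minimal sketch (not the paper's framework): a two-round game in which the
# actions available in round 2 depend on the round-1 choice, i.e. the kind of
# temporal dependency the SCADA example illustrates. All labels are hypothetical.
from itertools import product

# Round-1 actions for each player (hypothetical labels).
ROUND1 = {
    "attacker": ["probe", "stay_quiet"],
    "defender": ["monitor", "ignore"],
}

# Round-2 actions unlocked by each round-1 choice.
ROUND2 = {
    ("attacker", "probe"): ["exploit", "withdraw"],
    ("attacker", "stay_quiet"): ["withdraw"],        # probing is a prerequisite for exploiting
    ("defender", "monitor"): ["isolate", "do_nothing"],
    ("defender", "ignore"): ["do_nothing"],
}

def plays():
    """Enumerate all temporally consistent two-round plays."""
    for a1, d1 in product(ROUND1["attacker"], ROUND1["defender"]):
        for a2 in ROUND2[("attacker", a1)]:
            for d2 in ROUND2[("defender", d1)]:
                yield (a1, d1, a2, d2)

if __name__ == "__main__":
    for play in plays():
        print(play)
```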

    Accelerating Malware Detection via a Graphics Processing Unit

    Real-time malware analysis requires scanning large amounts of stored data for suspicious files. This is a time-consuming process that requires a large amount of processing power and often affects other applications running on a personal computer. This research investigates the viability of using Graphics Processing Units (GPUs), present in many personal computers, to distribute the workload normally processed by the standard Central Processing Unit (CPU). Three experiments are conducted using an industry-standard GPU, the NVIDIA GeForce 9500 GT card. The goal of the first experiment is to find the optimal number of threads per block for calculating an MD5 file hash. The goal of the second experiment is to find the optimal number of threads per block for searching an MD5 hash database for matches. In the third experiment, the size of the executable, the executable type (benign or malicious), and the processing hardware are varied in a full factorial experimental design. The experiment records whether the file is benign or malicious and measures the time required to identify the executable. This information is used to compare the performance of GPU hardware against CPU hardware. Experimental results show that a GPU can calculate an MD5 signature hash and scan a database of malicious signatures 82% faster than a CPU for files between 0 and 96 kB. If the file size is increased to 97-192 kB, the GPU is 85% faster than the CPU. This demonstrates that the GPU can provide a substantial performance increase over a CPU. These results could help achieve faster anti-malware products, faster network intrusion detection system response times, and faster firewall applications.
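    As a rough illustration of the hash-and-lookup workload that the study offloads to the GPU, the sketch below runs the equivalent CPU-side pipeline in Python: compute an MD5 digest for each file and check it against a set of known-malicious signatures, timing the scan. The file set and the signature database here are placeholders, not the study's data or code.

```python
# CPU-side sketch of the workload described above: hash each file with MD5 and
# look the digest up in a database of known-malicious signatures. The study
# offloads this hash-and-lookup step to the GPU; paths and signatures below
# are placeholders.
import hashlib
import time
from pathlib import Path

# Hypothetical signature database of known-malicious MD5 digests.
MALICIOUS_MD5 = {
    "44d88612fea8a8f36de82e1278abb02f",  # placeholder digest (EICAR test file)
}

def md5_of_file(path: Path) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(paths):
    start = time.perf_counter()
    results = {p: (md5_of_file(p) in MALICIOUS_MD5) for p in paths}
    elapsed = time.perf_counter() - start
    return results, elapsed

if __name__ == "__main__":
    files = list(Path(".").glob("*.exe"))  # placeholder file set
    verdicts, seconds = scan(files)
    for path, is_malicious in verdicts.items():
        print(f"{path}: {'malicious' if is_malicious else 'benign'}")
    print(f"scanned {len(files)} files in {seconds:.3f} s")
```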

    A large genome-wide association study of age-related macular degeneration highlights contributions of rare and common variants.

    This is the author accepted manuscript. The final version is available from Nature Publishing Group via http://dx.doi.org/10.1038/ng.3448. Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants (P < 5 × 10⁻⁸) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference P value = 4.1 × 10⁻¹⁰). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes. We thank all participants of all the studies included for enabling this research by their participation. Computer resources for this project have been provided by the high-performance computing centers of the University of Michigan and the University of Regensburg. Group-specific acknowledgments can be found in the Supplementary Note. The Center for Inherited Diseases Research (CIDR) Program contract number is HHSN268201200008I. This and the main consortium work were predominantly funded by 1X01HG006934-01 to G.R.A. and R01 EY022310 to J.L.H.
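    The genome-wide significance threshold cited above (P < 5 × 10⁻⁸) can be illustrated with a short sketch. The variant names and P values below are invented placeholder values, not the study's data; only the threshold and the locus names mentioned in the abstract are taken from the text.

```python
# Illustrative sketch only: applying the genome-wide significance threshold
# (P < 5e-8) to a table of association results and grouping the surviving
# variants by locus. Variant IDs and P values are placeholders.
from collections import defaultdict

GENOME_WIDE_SIGNIFICANCE = 5e-8

# Hypothetical (variant, locus, p_value) association results.
results = [
    ("rs0000001", "CFH",     3.2e-100),
    ("rs0000002", "MMP9",    4.1e-10),
    ("rs0000003", "TIMP3",   7.5e-9),
    ("rs0000004", "SLC16A8", 2.0e-6),   # suggestive, not genome-wide significant
]

significant_by_locus = defaultdict(list)
for variant, locus, p in results:
    if p < GENOME_WIDE_SIGNIFICANCE:
        significant_by_locus[locus].append(variant)

for locus, variants in significant_by_locus.items():
    print(f"{locus}: {len(variants)} genome-wide significant variant(s)")
```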

    Effects of Anacetrapib in Patients with Atherosclerotic Vascular Disease

    BACKGROUND: Patients with atherosclerotic vascular disease remain at high risk for cardiovascular events despite effective statin-based treatment of low-density lipoprotein (LDL) cholesterol levels. The inhibition of cholesteryl ester transfer protein (CETP) by anacetrapib reduces LDL cholesterol levels and increases high-density lipoprotein (HDL) cholesterol levels. However, trials of other CETP inhibitors have shown neutral or adverse effects on cardiovascular outcomes. METHODS: We conducted a randomized, double-blind, placebo-controlled trial involving 30,449 adults with atherosclerotic vascular disease who were receiving intensive atorvastatin therapy and who had a mean LDL cholesterol level of 61 mg per deciliter (1.58 mmol per liter), a mean non-HDL cholesterol level of 92 mg per deciliter (2.38 mmol per liter), and a mean HDL cholesterol level of 40 mg per deciliter (1.03 mmol per liter). The patients were assigned to receive either 100 mg of anacetrapib once daily (15,225 patients) or matching placebo (15,224 patients). The primary outcome was the first major coronary event, a composite of coronary death, myocardial infarction, or coronary revascularization. RESULTS: During the median follow-up period of 4.1 years, the primary outcome occurred in significantly fewer patients in the anacetrapib group than in the placebo group (1640 of 15,225 patients [10.8%] vs. 1803 of 15,224 patients [11.8%]; rate ratio, 0.91; 95% confidence interval, 0.85 to 0.97; P=0.004). The relative difference in risk was similar across multiple prespecified subgroups. At the trial midpoint, the mean level of HDL cholesterol was higher by 43 mg per deciliter (1.12 mmol per liter) in the anacetrapib group than in the placebo group (a relative difference of 104%), and the mean level of non-HDL cholesterol was lower by 17 mg per deciliter (0.44 mmol per liter), a relative difference of -18%. There were no significant between-group differences in the risk of death, cancer, or other serious adverse events. CONCLUSIONS: Among patients with atherosclerotic vascular disease who were receiving intensive statin therapy, the use of anacetrapib resulted in a lower incidence of major coronary events than the use of placebo. (Funded by Merck and others; Current Controlled Trials number, ISRCTN48678192; ClinicalTrials.gov number, NCT01252953; and EudraCT number, 2010-023467-18.)
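    The headline result can be roughly sanity-checked from the event counts given in the abstract. The sketch below computes the crude risk ratio and an approximate 95% confidence interval from those counts; it is not the trial's analysis (the published rate ratio comes from a time-to-event model), just arithmetic on the reported numbers.

```python
# Back-of-the-envelope check (not the trial's actual analysis): crude risk
# ratio and an approximate 95% CI from the event counts in the abstract.
import math

events_anacetrapib, n_anacetrapib = 1640, 15225
events_placebo, n_placebo = 1803, 15224

risk_treated = events_anacetrapib / n_anacetrapib
risk_placebo = events_placebo / n_placebo
risk_ratio = risk_treated / risk_placebo

# Standard error of log(risk ratio) for a crude 2x2 comparison.
se_log_rr = math.sqrt(
    1 / events_anacetrapib - 1 / n_anacetrapib
    + 1 / events_placebo - 1 / n_placebo
)
low, high = (math.exp(math.log(risk_ratio) + z * se_log_rr) for z in (-1.96, 1.96))

print(f"risk ratio ~= {risk_ratio:.2f} (95% CI {low:.2f} to {high:.2f})")
# Output is close to the reported rate ratio of 0.91 (0.85 to 0.97).
```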

    Knowledge co‐production: A pathway to effective fisheries management, conservation, and governance

    Although it is assumed that the outcomes from scientific research inform management and policy, the so-called knowledge-action gap (i.e., the disconnect between scientific knowledge and its application) is a recognition that there are many reasons why new knowledge is not always embraced by knowledge users. The concept of knowledge co-production has gained popularity within the environmental and conservation research communities as a mechanism for bridging the gap between knowledge and action, but it has yet to be fully embraced in fisheries research. Here we describe what co-production is, outline its benefits (relative to other approaches to research) and challenges, and provide practical guidance on how to embrace and enact knowledge co-production within fisheries research. Because co-production is an iterative and context-dependent process, there is no single way to do it, but there are best practices that can facilitate the generation of actionable research through respectful and inclusive partnerships. We present several brief case studies describing where co-production has worked in practice and the benefits it has accrued. As more members of the fisheries science and management community effectively engage in co-production, it will be important to reflect on the processes and share lessons with others. We submit that co-production has manifold benefits for applied science and should lead to meaningful improvements in fisheries management, conservation, and governance.